This article belongs to the debate » The EU AI Act’s Impact on Security Law
09 December 2024

The EU AI Act’s Impact on Security Law

A Debate Series

There’s a supranational law that will influence how border control agencies can screen you upon entering the EU, how financial institutions can monitor your payments, in which spaces you remain entitled to anonymity, how security agencies can predict what crimes you may commit in the future, and what type of evidence can be used against you in criminal trials. That law is Regulation 2024/1689 of 13 June 2024 “laying down harmonised rules on artificial intelligence” – more commonly known as the EU AI Act. The ever-growing significance of AI technologies touches almost every aspect of our lives. It is natural, then, that “the world’s first comprehensive AI law” will do the same. This debate series focuses on a very specific and sensitive legal domain that will be heavily impacted by the EU AI Act: Security Law.

The fact that the EU is in the business of legislating on security matters at all may be surprising to some readers. The European Treaties explicitly state that the Union shall respect Member States’ essential State functions, which include “maintaining law and order and safeguarding national security”. As Walter Hallstein stated in 1965, the EU is distinct from its Member States in that it has “no direct power of coercion, no army and no police.” Yet, the Union has, with national governments’ overwhelming support, used its legislative powers on the internal market and data protection to lay down rules on how European security agencies may use modern technologies for the purpose of protecting public security. Faced with the growing complexity and transnational character of security threats (or at least, the perception thereof), modern policing increasingly relies on such technologies. Regulating technologies means regulating policing. Coupled with the growing role of supranational security agencies and the interoperable databases they administer, this development is driving a tectonic shift in the European security architecture: From a middle-class suburban neighborhood where plots are neatly divided along Member State lines towards an integrated, modern European security high-rise.

The process of integrating European security law is imperfect and unfinished – given the constraints posed by the European Treaties, it is likely to remain that way for the foreseeable future. This inevitable imperfection, lamentable as it may be, creates opportunities for legal scholarship: Legal scholars are needed to explore the gaps and cracks in this new security architecture and to ultimately develop proposals for how to fix them. This debate series, being a product of VB Security and Crime, takes the recently adopted AI Act as an opportunity to do just that: It brings together legal scholars, both German and international, in order to explain, analyze and criticize the EU AI Act’s impact on security law from both an EU and German national law perspective. This dual-perspectival (and also bilingual) approach, I believe, illuminates the complex interdependency of EU and national security law.

The AI Act and how it works (in theory)

But first, some legal scene-setting is in order: The AI Act mostly regulates the legal responsibilities that providers and deployers have vis-à-vis their AI systems. An AI system, according to Art. 3 (1), is a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. A provider, broadly speaking, is the person or legal entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark (Art. 3 (3)). Deployers operate further down the line in that they are the entities “using an AI system under [their] authority” outside of “personal non-professional” activities (Art. 3 (4)).

The AI Act doesn’t regulate AI systems tout court but instead mostly adopts a risk-based approach – a risk being “the combination of the probability of an occurrence of harm and the severity of that harm” (Art. 3 (2)). The higher the risk, the heavier the regulatory burden on providers and deployers. There are four distinct risk levels: Unacceptable risks, high risks, limited risks and minimal risks. For the latter two categories, the AI Act imposes few substantive limitations: AI systems posing limited risks are subject to certain transparency obligations (Art. 50), and those posing minimal risks remain outside the AI Act’s scope. It will not surprise you that, given how deeply security law touches upon fundamental rights, most of the AI systems discussed in this debate series fall into the first or second category – and where they do not, that in itself will often be an issue.

When an AI system is deemed to produce unacceptable risks, it is prohibited. Such prohibited AI practices are listed in Art. 5. Of particular importance for our purposes here are the prohibitions on AI systems “making assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person (…)” in Art. 5 (1) lit. d (one could say: pure predictive policing AI), the “untargeted scraping of facial images from the internet or CCTV footage” in lit. e, and, perhaps most importantly, the (exception-riddled) prohibition on “real-time remote biometric identification (RBI) systems in publicly accessible spaces for the purposes of law enforcement” in lit. h – although social manipulation (lit. a), social scoring (lit. c), emotion recognition (lit. f) and biometric categorization systems inferring sensitive attributes such as race (lit. g) could also play significant roles in the future. Such (partial) prohibitions on certain use cases of AI systems are particularly interesting because they may effectively serve as intervention thresholds, thus fulfilling an important traditional function of substantive national security law provisions.

A more complex system of obligations applies to providers and deployers of high-risk AI systems. For our purposes, the most relevant high-risk AI systems are those listed in Annex III to the AI Act, as outlined in Art. 6 (2). These include, but are not limited to, certain kinds of AI systems used in biometrics (Annex III, no. 1), law enforcement (no. 6), as well as migration, asylum and border control management (no. 7). Providers and deployers of high-risk AI systems must comply with a wide-ranging catalogue of obligations (see Art. 16 for providers, Art. 26 for deployers). Providers must, inter alia, have a quality management system in place (Art. 17), keep elaborate documentation (Art. 18) and comply with the AI Act’s standards regarding data governance and quality (Art. 10), transparency (Art. 13) and human oversight (Art. 14). Compliance with these requirements must be demonstrated in a conformity assessment procedure which can, under certain conditions, be undertaken either internally or externally with the involvement of a notified body (see Art. 43 (1)). National market surveillance authorities are entrusted with robust investigative and enforcement tasks (see Art. 74 et seqq.). Non-compliance with the EU AI Act can be subject to administrative fines. For example, providers that violate their obligations vis-à-vis high-risk AI systems under Art. 16 can be fined up to EUR 15,000,000 or, if the provider is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year – whichever is higher (Art. 99 (4)).
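For readers who prefer the arithmetic spelled out, here is a minimal sketch of that ceiling logic – the turnover figure is purely hypothetical:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    # Ceiling under Art. 99 (4) AI Act for undertakings: EUR 15 million
    # or 3% of total worldwide annual turnover, whichever is higher.
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# A hypothetical provider with EUR 1 billion in annual turnover:
# 3% of turnover (EUR 30 million) exceeds EUR 15 million,
# so EUR 30 million is the applicable maximum.
print(max_fine_eur(1_000_000_000))  # 30000000.0
```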

The AI Act in supranational practice

The authors of this debate series explore how this regulatory framework will impact the doctrine and practice of security law. Analyzing this impact entails homing in on the interaction and interdependence of supranational standards and established national security law concepts. This relationship, it turns out, is complex and often flawed.

Quite a few contributions are analytically situated at the junctures where EU and national law meet. They explore the gaps and cracks at these structurally significant points of the European security architecture. Plixavra Vogiatzoglou, for instance, focuses on national security exceptions. Art. 4 (2) TEU demarcates a red line around Member States’ responsibilities for national security that the Union is not to cross. The AI Act reflects this in Art. 2 (3), which carves out from its scope AI systems insofar as they are “placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes”. Vogiatzoglou argues that this demarcation will be complicated by an as-yet unresolved tug-of-war between Member States and the Union (particularly the CJEU) over the notion of national security. Bettina Schöndorf-Haubold and Christopher Giogios highlight similar problems from the perspective of German law. They analyze how legislative powers that were originally aimed at harmonizing the internal market and data protection standards now increasingly shape substantive standards of national security law. In so doing, they investigate how EU rules establishing intervention thresholds for the use of AI systems by national security agencies can be reconciled with the European system of legislative competencies. Sarah Tas and Alice Giannini investigate supranational fragmentation in a more specific regulatory domain – the (partial) prohibition on real-time RBI systems in publicly accessible spaces for the purposes of law enforcement. They argue that the unclear interaction between EU law and national law terminology will exacerbate the prohibition’s already existing loopholes. Johanna Hahn analyzes the impact of the AI Act’s rules on AI-based RBI systems in specific scenarios arising under German security law. Working through these scenarios substantiates her critique that the logic underlying the AI Act’s regulation of such systems is normatively unconvincing.

Some contributions also examine the open questions that the EU AI Act creates for German legal practice. Sabine Gless focuses on criminal procedure law. AI systems, she writes, will increasingly be used as evidence in criminal trials. She argues that much conceptual work remains to be done to make the AI Act’s promise workable in legal practice. Dieter Kugelmann and Antonia Buchmann contend that, when it comes to regulating the use of AI systems for security purposes, the AI Act can only serve as a starting point. It creates a myriad of challenges: first, for national and regional legislatures, which may now need to adapt their security legislation; second, for security agencies, which must align their practices with the AI Act; and third, for courts, both European and national, which are now tasked with clarifying how EU law standards tie into fundamental rights standards established in national constitutional law, such as those in the German Constitutional Court’s recent decision on automated data analysis for the prevention of criminal acts.

Given the European security architecture’s complexity, the AI Act is perhaps inevitably incomplete in addressing the fundamental rights risks of certain security-related AI systems. Evelien Brouwer and Niovi Vavoula, in their respective contributions, highlight the fragmented nature of the EU AI Act’s regulatory framework in another particularly sensitive area: European Migration, Asylum and Border Management. Brouwer argues that, despite acknowledging that such systems “affect persons who are often in particularly vulnerable positions” (recital 60), the EU AI Act contains considerable loopholes when it comes to addressing such vulnerabilities. In addition to exempting large-scale IT systems already in use in migration contexts until 2030, it fails to properly address the risks of AI systems used for migration monitoring, analysis, prediction and resettlement. Vavoula contends that the EU AI Act’s system of substantive exceptions and procedural simplifications may prove fatal in this field. The exceptions to the prohibition on real-time RBI systems are particularly likely to be exploited in such contexts. Providers and deployers of migration-related AI systems, she argues, may also face little external scrutiny from civil society and oversight authorities, since they are exempted from many transparency obligations and can largely rely on self-assessment procedures.

The diversity of perspectives in this debate series reflects how complex a task it is to regulate the use of AI systems for security purposes and to integrate such rules into the odd, imperfect and ever-evolving structure that is the European security architecture. Naturally, this debate series can only serve as a starting point for further debate on this momentous task. All readers are warmly encouraged to add their perspectives – and to stay tuned for more contributions to VB Security and Crime.


SUGGESTED CITATION  Thönnes, Christian: The EU AI Act’s Impact on Security Law: A Debate Series, VerfBlog, 2024/12/09, https://verfassungsblog.de/the-eu-ai-acts-impact-on-security-law/, DOI: 10.59704/148d0746da7c7f3c.
